Compiling machine-independent parallel programs

Authors
Abstract


Similar articles

Compiling and Optimizing Dynamic Parallel Programs

Data parallelism is an array-based programming model that achieves massive parallelism through the lock-step execution of individual instructions simultaneously on all members of a parallel array. The semantics of data-parallel programming languages are designed for extremely fine-grained parallelism (on the level of a single arithmetic instruction), with tight synchronization. Data parallel la...
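As a rough illustration of that lock-step model (not taken from the paper above), the following NumPy sketch contrasts a single data-parallel array statement with its per-element scalar equivalent; the arrays and the operation are hypothetical.

```python
import numpy as np

# Hypothetical data: two small parallel arrays.
a = np.arange(8, dtype=np.float64)
b = np.full(8, 2.0)

# Data-parallel form: one array statement, conceptually applied to all
# elements "in lock step".
c = a * b + 1.0

# Equivalent scalar semantics: the same arithmetic per element, with an
# implicit barrier between whole-array statements.
c_ref = np.empty_like(a)
for i in range(a.size):
    c_ref[i] = a[i] * b[i] + 1.0

assert np.allclose(c, c_ref)
```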


Compiling Data Parallel Programs to Message Passing Programs for Massively Parallel MIMD Systems

The currently dominant message-passing programming paradigm for MIMD systems is difficult to use and error prone. This is due to the lack of shared memory and to race-condition errors that are hard to debug, especially on massively parallel systems. One approach to avoiding explicit communication is the data-parallel programming model. This model stands for single-threaded, glo...
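A minimal SPMD sketch of the idea, assuming mpi4py and a block distribution: one global array statement is run as a single program on every node, each operating on its own block (owner computes), with communication inserted only where the global view requires it. This is an illustration, not the paper's compilation scheme.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1024                 # global array length (assumed divisible by size)
local_n = N // size

# Owner-computes: each rank holds and updates only its own block of the
# global array, so the statement c = a * 2 needs no communication.
a_local = np.arange(rank * local_n, (rank + 1) * local_n, dtype=np.float64)
c_local = a_local * 2.0

# Communication appears only where the global view is needed, e.g. to
# assemble the full result on rank 0.
c_global = np.empty(N, dtype=np.float64) if rank == 0 else None
comm.Gather(c_local, c_global, root=0)
```

Run with, e.g., `mpiexec -n 4 python spmd_sketch.py`.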


Compiling machine learning programs via high-level tracing

We describe JAX, a domain-specific tracing JIT compiler for generating high-performance accelerator code from pure Python and Numpy machine learning programs. JAX uses the XLA compiler infrastructure to generate optimized code for the program subroutines that are most favorable for acceleration, and these optimized subroutines can be called and orchestrated by arbitrary Python. Because the syst...
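A small usage sketch of the tracing JIT described above; the function, shapes, and data are made up, but `jax.jit` and `jax.numpy` are the real entry points.

```python
import jax
import jax.numpy as jnp

# jax.jit traces this pure Python/NumPy-style function on abstract values
# and hands the resulting program to XLA for compilation.
@jax.jit
def predict(w, b, x):
    return jnp.tanh(jnp.dot(x, w) + b)

w = jnp.ones((3, 2))
b = jnp.zeros(2)
x = jnp.ones((4, 3))

y = predict(w, b, x)   # first call compiles; later calls reuse the kernel
print(y.shape)         # (4, 2)
```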


Compiling Array References with Affine Functions for Data-parallel Programs

It is an important research topic for parallelizing compilers to generate local memory access sequences and communication sets while compiling a data-parallel language into an SPMD (Single Program Multiple Data) program. In this paper, we present a scheme to efficiently enumerate the local memory access sequence and evaluate communication sets. We use a class table to store the information that ex...


Compiling Array References with Affine Functions for Data-Parallel Programs

An important research topic for parallelizing compilers is generating local memory access sequences and communication sets while compiling a data-parallel language into an SPMD (Single Program Multiple Data) program. In this paper, we present a scheme to efficiently enumerate local memory access sequences and to evaluate communication sets. We use a class table to store information that is extr...
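For comparison, a naive reference enumeration (not the class-table scheme of either paper above): it scans the loop index space and keeps the global indices of an affine access `A[s*i + o]` that a given processor owns under a block-cyclic distribution. All parameter names are illustrative.

```python
def local_access_sequence(n, s, o, P, b, p):
    """Global indices of A[s*i + o], i = 0..n-1, owned by processor p
    under a block-cyclic(b) distribution over P processors (naive scan)."""
    owned = []
    for i in range(n):
        g = s * i + o              # affine global index
        if (g // b) % P == p:      # block-cyclic owner test
            owned.append(g)
    return owned

# Example: access A[3*i + 1] for i in 0..15, cyclic(2) over 4 processors.
print(local_access_sequence(16, 3, 1, 4, 2, 0))   # [1, 16, 25, 40]
```

Efficient schemes such as the one the abstract describes compute this sequence without scanning every loop iteration.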



Journal

Journal title: ACM SIGPLAN Notices

Year: 1993

ISSN: 0362-1340, 1558-1160

DOI: 10.1145/163114.163127